import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(0, 1)
lam = 0.1 #set small amount of beam hardening
mu = x
def beamHardening(x, lam=1): #here we cannot separate mu and penetration distance so we have to assume them to be equal
    return x/(1+lam*x)

plt.plot(x, mu, label='ideal')
plt.plot(x, beamHardening(x, lam), label='real')
plt.xlabel('Penetration depth [cm]')
plt.ylabel('Measured value')
plt.legend()
plt.show()
Now let’s add an imaging simulation example
- This notebook is available as a Colab notebook
Now that we’ve seen how a nonlinear 1D signal can be made linear by using Vandermonde interpolation, we can quickly expand our discussion to images by treating each pixel as a 1D function of intensity to be corrected.
Let’s first see how a nonlinearity introduced in our projection data is reflected in the reconstructed image…
We start with a simplified model of beam hardening described in: https://aapm.onlinelibrary.wiley.com/doi/full/10.1118/1.2742501
where the polychromatic attenuation coefficient \(\langle \mu \rangle\) can be modeled in terms of the true (monochromatic) attenuation coefficient \(\mu_0\) and penetration distance \(x\)
\[\begin{equation} \langle \mu \rangle = \frac{\mu_0}{1+\lambda x} \end{equation}\]
Note: \(\lambda\) is a fitting parameter (not wavelength) that controls the strength of beam hardening. If \(\lambda = 0\) then there is no beam hardening. If \(\lambda \to \infty\) then \(\langle \mu \rangle \to 0\).
Limitation: We cannot easily separate \(x\) and \(\mu\): in the ideal case \(-\ln(I/I_0) = \int \mu \, dx\), so here we must assume that all materials have roughly equal \(\mu\), i.e. that \(\mu\) itself does not contribute to beam hardening (which we know is not true, since bone hardens the beam more than water). We make this assumption only for this demonstration.
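Before plotting, we can quickly verify the two limits noted above numerically (a small sanity check, not in the original notebook, using the beamHardening function defined earlier):

x_test = np.linspace(0, 1, 5)
print(beamHardening(x_test, lam=0))    # lam = 0: returns x unchanged, i.e. no beam hardening
print(beamHardening(x_test, lam=1e6))  # very large lam: measured values collapse toward 0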
plt.plot(mu, mu, label='monochromatic')
plt.plot(mu, beamHardening(mu), label='polychromatic')
plt.xlabel('ideal attenuation coefficient [cm^-1]')
plt.ylabel('measured attenuation coefficient [cm^-1]')
plt.legend()
plt.show()
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, rescale, iradon

p = shepp_logan_phantom()
p = rescale(p, scale=1, mode='reflect') #no channel argument needed for a 2D grayscale image
theta = np.linspace(0., 360., 360, endpoint=False)
r = radon(p, theta=theta)
ideal = iradon(r, theta=theta)
corrupted_r = beamHardening(r, lam=0.1) #here we cannot separate mu and penetration distance so we have to assume them to be equal
uncorrected = iradon(corrupted_r, theta=theta)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))

ax1.set_title("Original")
ax1.imshow(ideal, cmap=plt.cm.Greys_r)

ax2.set_title("Simulated Beam Hardening")
ax2.imshow(uncorrected, cmap=plt.cm.Greys_r)

fig.tight_layout()
plt.show()
idealLine = ideal[:, int(ideal.shape[0]/2)]
uncorrectedLine = uncorrected[:, int(ideal.shape[0]/2)]
uncorrectedLine /= max(uncorrectedLine) #normalized for easier comparison

plt.plot(idealLine)
plt.plot(uncorrectedLine)
plt.legend(["ideal", "uncorrected"])
plt.show()
Comparison to the Kak and Slaney Ch. 4 method
Used by Chabior et al. (2011)
correctionLineFunc = idealLine/uncorrectedLine
plt.plot(correctionLineFunc)
This is clearly not a constant correction factor across the line profile.
Let’s assume the simplest correction, a polynomial of degree \(N=1\): \(p = \sum_{n} c_n q^{n} = c_0 + c_1 q\), where we assume \(c_0 = 0\), so \(p = cq\)
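As an aside, a single coefficient \(c\) in \(p = cq\) can be fit by least squares over the whole line profile; a minimal sketch (this closed form is the one-parameter least-squares solution, not the pointwise correction used below):

c_scalar = np.dot(uncorrectedLine, idealLine) / np.dot(uncorrectedLine, uncorrectedLine)  # argmin_c ||c*q - p||^2
print(c_scalar)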
plt.plot(idealLine)
plt.plot(uncorrectedLine)
plt.plot(uncorrectedLine*correctionLineFunc)
plt.legend(["ideal", "uncorrected", "corrected"])
plt.show()
print(correctionLineFunc.shape)
print(corrupted_r.shape)

pc = np.zeros([corrupted_r.shape[0], corrupted_r.shape[1]])
for i in np.arange(corrupted_r.shape[1]): #apply the line correction to every view angle
    pc[:,i] = corrupted_r[:,i]*correctionLineFunc

(400,)
(400, 360)
chabior_corrected = iradon(pc, theta=theta)

plt.imshow(chabior_corrected, cmap='gray')
plt.title("Chabior method applied to CT")
plt.show()
In our first imaging example we will start by fitting a single polynomial correction function applied uniformly to every pixel in our 400x400 pixel image.
We can now generate our new Vandermonde matrix. Let’s start with \(n=3\) monomial terms; since each image has \(400^2=160000\) pixels, our matrix will look like
\[\begin{bmatrix} 1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ \vdots & \vdots & \vdots \\ 1 & x_{159999} & x_{159999}^2 \\ \end{bmatrix}\]

def poly_recon(A, theta):
    """Argument is projection vandermonde matrix [proj.ravel() X view angles] and theta array, returns recon.ravel() X deg matrix of vandermonde recons"""
    deg = A.shape[1]
    sz = int(A.shape[0]/theta.size)
    A = A.reshape([sz, theta.size, deg])
    f = np.zeros([sz*sz, deg])
    for i in range(deg):
        fi = iradon(A[:,:,i], theta=theta, circle=True)
        f[:,i] = fi.ravel()
    return f

def vanderRecon(proj, theta, deg):
    """Argument is [xpix*ypix X theta] projection, returns [recon.ravel() X deg] matrix of vandermonde reconstructed projections"""
    A = np.vander(proj.ravel(), deg, increasing=True)
    return poly_recon(A, theta)
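To make the monomial structure concrete, here is what np.vander produces for a tiny three-value input (arbitrary values, for illustration only):

#each row is [1, x, x^2] for one input value; increasing=True puts low powers first
tiny = np.array([0.0, 0.5, 2.0])
print(np.vander(tiny, 3, increasing=True))
# [[1.   0.   0.  ]
#  [1.   0.5  0.25]
#  [1.   2.   4.  ]]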
Let’s confirm that the size of the Vandermonde matrix of our recons is what we expect…
deg = 3 #set number of monomial terms
A = vanderRecon(corrupted_r, theta, deg)
print(A.shape)
(160000, 3)
A_rank = np.linalg.matrix_rank(A)
print(A_rank)
3
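Full column rank (3 of 3) means the least-squares fit has a unique solution. We can also peek at the condition number of this small matrix, foreshadowing the conditioning issues discussed later (the exact value depends on the simulated data):

print(np.linalg.cond(A))  #ratio of largest to smallest singular value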
Now let’s define our correction function once we determine our coefficients, along with a loss function, the mean squared error (MSE), to evaluate it
def mse(y_hat, y):
    return ((y_hat-y)**2).mean()

deg = 8 #set number of monomial terms
A = vanderRecon(corrupted_r, theta, deg)
c = np.linalg.lstsq(A, ideal.ravel(), rcond=None)[0]
corrected_proj = np.polynomial.polynomial.polyval(corrupted_r, c).reshape(corrupted_r.shape)
uncorrected = iradon(corrupted_r, theta=theta, circle=True)
corrected = iradon(corrected_proj, theta=theta, circle=True)

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 6))

ax1.set_title("Original")
ax1.imshow(ideal, cmap=plt.cm.Greys_r)

ax2.set_title("Simulated Beam Hardening \n MSE = " + str(mse(uncorrected, ideal)))
ax2.imshow(uncorrected, cmap=plt.cm.Greys_r)

ax3.set_title("Linearized \n MSE = " + str(mse(corrected, ideal)))
ax3.imshow(corrected, cmap=plt.cm.Greys_r)

fig.tight_layout()
plt.show()
correctedLine = corrected[:, int(ideal.shape[0]/2)]

plt.plot(idealLine)
plt.plot(uncorrectedLine)
plt.plot(correctedLine)
plt.legend(["ideal", "uncorrected", "corrected"])
plt.show()
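As a sanity check on why fitting in the reconstruction domain works: filtered back projection is linear, so reconstructing the polynomial-corrected sinogram should equal the same linear combination of the per-monomial reconstructions in A (a small verification, assuming A, c, and corrected from the deg = 8 fit above):

#iradon(sum_n c_n q^n) = sum_n c_n iradon(q^n) = A @ c, by linearity of FBP
combo = (A @ c).reshape(ideal.shape)
print(np.allclose(combo, corrected, atol=1e-6))  #expect True, up to accumulated floating-point error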
Now let’s add a spatially varying component
import random

def add_lines(r):
    n_lines = int(random.random()*40)
    for i in range(n_lines):
        line_intensity = random.uniform(0.5, 1)
        line_width = int(10*random.random())
        line_location = int(random.random()*r.shape[0])
        r[line_location:line_location+line_width, :] *= line_intensity
    return r
def corruptProjection(proj, bh_strength=0.01, spatialDependence_strength=1):
    m = np.ones(proj.shape)*(1-spatialDependence_strength) + add_lines(np.ones(proj.shape))*spatialDependence_strength
    corrupted_r = beamHardening(proj*m, lam=bh_strength)
    return m, corrupted_r
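Note that add_lines draws from Python's global random module, so the corruption pattern differs on every run; seeding the generator first (an optional addition, any fixed value works) makes the experiment repeatable:

random.seed(0)  #fix the RNG so the randomly placed lines are reproducible across runs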
m, corrupted_r = corruptProjection(r, bh_strength=0.1)

plt.imshow(corrupted_r, cmap=plt.cm.Greys_r)
plt.show()

plt.plot(corrupted_r[:, 100])
plt.show()
uncorrected = iradon(corrupted_r, theta=theta, circle=True)

plt.imshow(uncorrected, cmap=plt.cm.Greys_r)
plt.show()
NumPy has a built-in 2D Vandermonde matrix constructor that we can use here:
https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.polynomial.polynomial.polyvander2d.html
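It helps to see how polyvander2d orders its columns before we use it. For a single point with \(q = 2\), \(M = 3\) and degrees [2, 2], column \(3i + j\) holds \(q^i M^j\) (a tiny illustration with arbitrary values):

#columns are q^i * M^j with j varying fastest: [1, M, M^2, q, qM, qM^2, q^2, q^2 M, q^2 M^2]
print(np.polynomial.polynomial.polyvander2d(np.array([2.0]), np.array([3.0]), [2, 2]))
# expected: [[ 1.  3.  9.  2.  6. 18.  4. 12. 36.]]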
def vanderComborecon(q, m, theta, deg):
    qm = np.polynomial.polynomial.polyvander2d(q.ravel(), m.ravel(), [deg-1, deg-1])
    return poly_recon(qm, theta)
deg = 6
f = vanderComborecon(corrupted_r, m, theta, deg)
print("The Vandermonde matrix has a shape of [{0:d}, {1:d}] and a condition number of: {2:.0f}".format(f.shape[0], f.shape[1], np.linalg.cond(f)))
The Vandermonde matrix has a shape of [160000, 36] and a condition number of: 3367960979906
Note that the condition number of the \(qm\) matrix is very large, meaning it is poorly conditioned, and thus our algorithm will be very sensitive to different input values. Is there a way to decrease the condition number and better condition our problem?
The condition number is a measure of the sensitivity of a problem to error in its input. A large condition number indicates an unstable result that could vary drastically with minor errors in the input.
A large condition number is a known problem of Vandermonde matrix inversions.
https://en.wikipedia.org/wiki/Condition_number
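One standard remedy is to swap the monomial basis for an orthogonal one, such as Chebyshev polynomials, after rescaling the inputs to [-1, 1]; the model is the same polynomial, but the matrix is typically far better conditioned. A minimal 1D sketch for comparison (an aside, not explored further in this notebook):

#compare the conditioning of monomial vs Chebyshev bases on the same rescaled data
q_scaled = 2*(corrupted_r.ravel() - corrupted_r.min())/(corrupted_r.max() - corrupted_r.min()) - 1
V_mono = np.vander(q_scaled, 6, increasing=True)          #monomial basis, 6 columns
V_cheb = np.polynomial.chebyshev.chebvander(q_scaled, 5)  #Chebyshev basis T_0..T_5
print(np.linalg.cond(V_mono), np.linalg.cond(V_cheb))     #cond(V_cheb) is typically much smaller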
c = np.linalg.lstsq(f, ideal.ravel(), rcond=None)[0]

plt.plot(c)
plt.show()
The coefficients above are large and widely varying, a direct result of the problem being poorly conditioned: not only do they change drastically with small changes in the input (try re-running the notebook with noise added), but they also aren’t very intuitive to interpret.
def square(c):
    return c.reshape([int(np.sqrt(len(c))), -1])

def performCorrection2D(corrupted_r, m, c):
    return np.polynomial.polynomial.polyval2d(corrupted_r.ravel(), m.ravel(), square(c)).reshape(corrupted_r.shape)
Let’s visualize the coefficients to see which basis images contribute the most
Recall the correction is \(\hat{p} = \sum_{i,j} c_{i,j} q^i M^j\); thus the coordinates in the image below indicate the monomial, and the color its relative contribution to the polynomial
def correctAndDisplay(corrupted_r, m, c, theta):
    p = performCorrection2D(corrupted_r, m, c)
    corrected = iradon(p, theta=theta, circle=True)
    plt.imshow(corrected, cmap=plt.cm.Greys_r)
    ms_error = mse(corrected, ideal)
    plt.title("Corrected \n MSE = " + str(ms_error))
    return ms_error

correctAndDisplay(corrupted_r, m, c, theta)
0.00014014461661890804
def visualizeCoefficients(c):
    plt.imshow(square(c))
    plt.xlabel('M')
    plt.ylabel('q')
    plt.colorbar()
    plt.title('Monomial contributions')

visualizeCoefficients(c)
p = performCorrection2D(corrupted_r, m, c)
corrected = iradon(p, theta=theta, circle=True)

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(15, 12))

ax1.set_title("Original")
ax1.imshow(ideal, cmap=plt.cm.Greys_r)

ax2.set_title("Corrupted \n MSE = " + str(mse(uncorrected, ideal)))
ax2.imshow(uncorrected, cmap=plt.cm.Greys_r)

ax3.set_title("Corrected \n MSE = " + str(mse(corrected, ideal)))
ax3.imshow(corrected, cmap=plt.cm.Greys_r)

fig.tight_layout()
plt.show()
Not bad results for 10% beam hardening!
Finally, is there a pattern that emerges in the monomials that are activated?
def find2Dpolynomialcoefficients(corrupted_r, m, theta, deg):
    f = vanderComborecon(corrupted_r, m, theta, deg)
    return np.linalg.lstsq(f, ideal.ravel(), rcond=None)[0]

def plotAndVisualizeDegreeDependence(corrupted_r, m, theta, maxDegree=10):
    ms_error = []
    for d in np.arange(2, maxDegree):
        coefficients = find2Dpolynomialcoefficients(corrupted_r, m, theta, d)

        plt.figure(figsize=[14, 4])
        plt.subplot(1, 3, 1)
        visualizeCoefficients(coefficients)
        plt.subplot(1, 3, 2)
        err = correctAndDisplay(corrupted_r, m, coefficients, theta)
        ms_error.append(err)
        plt.subplot(1, 3, 3)
        plt.plot(np.arange(2, d+1), ms_error)
        plt.xlabel('Polynomial degree')
        plt.ylabel('MSE')
        plt.show()

plotAndVisualizeDegreeDependence(corrupted_r, m, theta)
It’s clear that the MSE decreases and asymptotes as the polynomial degree increases (but at the cost of a larger Vandermonde matrix whose condition number grows and whose solution becomes less stable).
How do the spatial-dependence strength and the beam-hardening strength control the monomial activation map?
Try this again when there is no spatial component to the beam hardening; how does this affect the \(M\) dependency?
noSpatialm, noSpatial_corrupted_r = corruptProjection(r, bh_strength=0.1, spatialDependence_strength=0)
plotAndVisualizeDegreeDependence(noSpatial_corrupted_r, noSpatialm, theta)
Finally, repeat once more with the beam hardening strength set to zero and the spatially varying strength at its maximum; how does this affect the coefficient activation map?
noBHm, noBH_corrupted_r = corruptProjection(r, bh_strength=0, spatialDependence_strength=1)
plotAndVisualizeDegreeDependence(noBH_corrupted_r, noBHm, theta)
Observe how the MSE decreases and then increases at higher degrees. This demonstrates a weakness of Vandermonde polynomial interpolation: as the Vandermonde matrix becomes larger, its condition number grows dramatically and the result becomes unstable.
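Before moving on, one simple stabilizer worth knowing about is Tikhonov (ridge) regularization, which trades a small amount of bias for a much more stable solution; a hedged sketch on the matrix f from above (alpha is a hypothetical damping parameter, not tuned here):

#ridge-regularized least squares: solve (f^T f + alpha*I) c = f^T y
alpha = 1e-3
c_ridge = np.linalg.solve(f.T @ f + alpha*np.eye(f.shape[1]), f.T @ ideal.ravel())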
Move on to the last notebook on iterative methods for solving the Vandermonde interpolation least squares problem: https://colab.research.google.com/drive/1kErNKAbJlJUpq2jsb0roMJA7kTNB9vY0